The commonly used LRU replacement policy causes thrashing for memory-intensive workloads. A simple mechanism that dynamically changes the insertion policy used by LRU replacement reduces cache misses by 21 percent and requires a total storage overhead of less than
Authors
Abstract
One of the major limiters of computer system performance has been the access to main memory, which is typically two orders of magnitude slower than the processor. To bridge this gap, modern processors already devote more than half the on-chip transistors to the last-level cache (in our studies, the L2 cache). These designs typically use the least-recently-used (LRU) replacement policy, or its approximations, for managing all levels of the cache hierarchy. Because smaller levels of the cache hierarchy filter out temporal locality, the access stream of the last-level cache has very little temporal locality. As a result, the LRU policy causes a significant percentage of cache lines in the last-level cache to remain unused after cache insertion. We refer to cache lines that are not reused between insertion and eviction as zero-reuse lines.

Figure 1 shows that for the baseline 1-Mbyte, 16-way, LRU-managed L2 cache, more than half the lines installed in the cache are never reused before being evicted. Thus, the LRU policy results in inefficient use of L2 cache space because most of the inserted lines occupy cache space without contributing to cache hits.

Zero-reuse lines occur for two reasons. First, the line has no temporal locality, which means that the line is never re-referenced; it is not beneficial to insert such lines in the cache. Second, the line is re-referenced at a distance greater than the cache size, so the LRU policy evicts the line before it is reused. For example, if a workload frequently reuses a working set of 2 Mbytes, and the available cache size is 1 Mbyte, the LRU policy will evict all the inserted lines before they are reused, causing zero reuse for almost all the lines. Figure 2 shows misses per thousand instructions (MPKI) for two memory-intensive benchmarks (art and mcf) when the cache size is varied under the LRU policy. Art frequently uses a 1.3-Mbyte data structure, and mcf frequently uses a 3.5-Mbyte data structure.
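The thrashing behavior described here can be reproduced with a small simulation. The following sketch is illustrative only (the function name, trace, and set size are not from the article): it models a single 16-way LRU-managed set and counts how many inserted lines are never reused, using a cyclic trace whose working set (20 lines) exceeds the set's capacity, analogous to the 2-Mbyte working set in a 1-Mbyte cache.

```python
from collections import OrderedDict

def lru_zero_reuse(ways, trace):
    """Simulate one LRU-managed cache set and count zero-reuse lines:
    lines that never receive a hit between insertion and eviction
    (or that are still unreused when the trace ends)."""
    cache = OrderedDict()              # line address -> hits since insertion
    inserted = 0
    zero_reuse = 0
    for addr in trace:
        if addr in cache:
            cache[addr] += 1
            cache.move_to_end(addr)    # promote the line to the MRU position
        else:
            inserted += 1
            if len(cache) == ways:
                _, hits = cache.popitem(last=False)  # evict the LRU line
                if hits == 0:
                    zero_reuse += 1
            cache[addr] = 0            # LRU inserts new lines at MRU
    # Lines still resident but never reused also count as zero-reuse.
    zero_reuse += sum(1 for h in cache.values() if h == 0)
    return zero_reuse, inserted

# A 20-line working set cycled through a 16-way set thrashes under LRU:
# every line is evicted before its next reuse, so every insertion is zero-reuse.
z, n = lru_zero_reuse(16, list(range(20)) * 4)  # z == n: 100% zero-reuse lines
```

With a working set that fits (say 12 lines in 16 ways), the same function reports zero-reuse only for the first cold insertions, illustrating that the problem is the reuse distance exceeding the cache size, not LRU per se.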
For the baseline 1-Mbyte cache, the LRU replacement policy causes thrashing for these workloads, and cache lines are evicted before being reused. Zero-reuse lines account for more than 90 percent of the installed cache lines for these two workloads, indicating inefficient use of cache space.

Moinuddin K. Qureshi
Similar articles
Reduction in Cache Memory Power Consumption based on Replacement Quantity
Today, power consumption is considered one of the most important issues, so reducing it plays a considerable role in system design. Previous studies have shown that approximately 50% of total power consumption occurs in cache memories. There is a direct relationship between power consumption and the number of replacements made in the cache. The fewer the replacements, the less...
Least recently plus five least frequently replacement policy (LR+5LF)
In this paper, we present a new block replacement policy based on a new, efficient algorithm that combines two important policies, Least Recently Used (LRU) and Least Frequently Used (LFU). The implementation of the proposed policy is simple; it requires limited calculations to determine the victim block. We propose models to implement the LRU and LFU policies. The new policy gives e...
متن کاملAn Improvement in WRP Block Replacement Policy with Reviewing and Solving its Problems
One of the most important factors in better file system performance is efficient buffering of disk blocks in main memory. Efficient buffering helps reduce the wide speed gap between main memory and hard disks. In this buffering system, the block replacement policy is one of the most important design decisions, determining which disk block should be replaced when the buffer is full. To o...